This paper presents a planner that uses the centroidal dynamics and the full kinematics of a humanoid robot to generate walking trajectories. The interaction between the robot and the walking surface is modeled explicitly through new conditions, the dynamic complementarity constraints. The approach does not require a predefined contact sequence and generates footsteps automatically. We characterize the robot control objectives through a set of tasks and address them by solving an optimal control problem. We show that walking motions can be achieved automatically by specifying a minimal set of references, such as a constant desired center-of-mass velocity and a reference point on the ground. Furthermore, we analyze how the choice of contact model affects the computational time. We validate the approach by generating and testing walking trajectories for the humanoid robot iCub.
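To make the complementarity idea concrete, the sketch below encodes a relaxed contact complementarity condition for a single contact point in CasADi. The variable names (f for the normal force, h for the height of the contact point above the ground) and the relaxation threshold eps are illustrative assumptions, not the paper's actual formulation.

```python
# Minimal sketch, assuming CasADi with IPOPT is available: a relaxed
# complementarity condition 0 <= f  perp  h >= 0 for one contact point.
import casadi as ca

opti = ca.Opti()
f = opti.variable()          # normal contact force (assumed name)
h = opti.variable()          # contact-point height above ground (assumed)

opti.subject_to(f >= 0)      # contact forces can only push
opti.subject_to(h >= 0)      # no ground penetration
eps = 1e-4                   # relaxation: force and gap cannot both be large
opti.subject_to(f * h <= eps)

# Toy objective: lift the foot to 0.1 m while penalizing spurious forces.
opti.minimize((h - 0.1) ** 2 + 1e-3 * f ** 2)
opti.solver('ipopt')
sol = opti.solve()
print(sol.value(f), sol.value(h))
```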
Jumping can be an effective means of locomotion for overcoming small terrain gaps or obstacles. In this paper, we propose two different approaches to performing jumps with a humanoid robot. Specifically, starting from a predefined CoM trajectory, we develop the theory for a velocity controller and for a torque controller that evaluates the joint inputs through optimization techniques. The controllers were tested both in simulation and on the humanoid robot iCub. In simulation, the robot was able to jump with both controllers, while the real system jumped with the velocity controller only. The results highlight the importance of controlling the centroidal momentum, and they show that the joint performance, namely the maximum power of the leg and torso joints, as well as the low-level control performance, is crucial for achieving acceptable results.
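As a rough illustration of the torque-controller idea (joint inputs evaluated through an optimization technique), the sketch below solves a regularized least-squares problem for joint accelerations that track a desired centroidal momentum rate and maps them to torques via inverse dynamics. All symbols (M, bias, J_cmm, hdot_des) are assumed inputs, and the contact forces and momentum-drift terms of a real controller are omitted.

```python
import numpy as np

def torque_from_momentum(M, bias, J_cmm, hdot_des, reg=1e-3):
    """Toy optimization-based torque controller: find joint accelerations
    qdd that track a desired centroidal momentum rate hdot_des, then map
    them to torques with unconstrained inverse dynamics.
    M: joint-space mass matrix, bias: Coriolis/gravity terms,
    J_cmm: centroidal momentum matrix (all assumed given)."""
    n = M.shape[0]
    A = np.vstack([J_cmm, np.sqrt(reg) * np.eye(n)])  # tracking + regularization
    b = np.concatenate([hdot_des, np.zeros(n)])
    qdd, *_ = np.linalg.lstsq(A, b, rcond=None)
    return M @ qdd + bias
```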
This paper takes a step toward providing humanoid robots with adaptive morphological capabilities. We propose a systematic approach to making a robot cover deform its shape, with overall dimensions that fit the anthropometry of a humanoid robot. More precisely, we propose a cover concept consisting of two main components: a skeleton, which is a repetition of a basic element called the node, and a soft membrane, which wraps the cover and deforms with its motion. This paper focuses on the cover skeleton and addresses the challenging problems of node design, system modeling, motor positioning, and control design of the deformation system. The cover modeling focuses on the kinematics, and we propose a systematic approach to defining the kinematic constraints of the system. We then apply genetic algorithms to find motor locations such that the deformable cover is fully actuated. Finally, we present control algorithms that make the cover deform into time-varying shapes. The overall approach is validated through kinematic simulations of four covers of different square dimensions, composed of 3x3, 4x8, 8x8, and 20x20 nodes, respectively. For each cover, we apply the genetic algorithm to select the motor locations and run simulations to track a desired shape. The simulation results show that the proposed approach ensures that the covers track the desired shapes with good tracking performance.
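The sketch below conveys the flavor of the genetic-algorithm step for placing motors on the cover's nodes. The fitness function, which in the paper would test whether a candidate placement leaves the deformable cover fully actuated, is assumed to be supplied; population size, crossover, and mutation details are illustrative choices, not the paper's.

```python
import random

def genetic_motor_placement(n_nodes, n_motors, fitness, pop_size=30, n_gens=100):
    """Toy GA over motor placements on the cover's nodes. An individual
    is a sorted tuple of node indices that hold a motor; `fitness`
    (assumed supplied) would score e.g. how close the placement comes
    to making the cover fully actuated."""
    def random_individual():
        return tuple(sorted(random.sample(range(n_nodes), n_motors)))

    population = [random_individual() for _ in range(pop_size)]
    for _ in range(n_gens):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]           # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            pool = list(set(a) | set(b))                # crossover: gene union
            child = random.sample(pool, min(n_motors, len(pool)))
            while len(child) < n_motors:                # mutation / repair
                gene = random.randrange(n_nodes)
                if gene not in child:
                    child.append(gene)
            children.append(tuple(sorted(child)))
        population = parents + children
    return max(population, key=fitness)
```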
Pre-trained language models (PLMs) have advanced the state of the art across NLP applications, but lack domain-specific knowledge that does not naturally occur in pre-training data. Previous studies augmented PLMs with symbolic knowledge for different downstream NLP tasks. However, the knowledge bases (KBs) utilized in these studies are usually large-scale and static, in contrast to the small, domain-specific, and modifiable knowledge bases that are prominent in real-world task-oriented dialogue (TOD) systems. In this paper, we showcase the advantages of injecting domain-specific knowledge prior to fine-tuning on TOD tasks. To this end, we utilize light-weight adapters that can be easily integrated with PLMs and serve as a repository for facts learned from different KBs. To measure the efficacy of the proposed knowledge injection methods, we introduce Knowledge Probing using Response Selection (KPRS) -- a probe designed specifically for TOD models. Experiments on KPRS and the response generation task show that knowledge injection with adapters improves over strong baselines.
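A typical light-weight adapter is a residual bottleneck inserted after a transformer sub-layer; the sketch below shows this standard construction in PyTorch. The bottleneck width and activation are assumptions, and the paper's exact adapter configuration and knowledge-injection objective are not reproduced here.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Standard residual bottleneck adapter (a common light-weight design;
    the paper's exact configuration may differ). Inserted after a
    transformer sub-layer, it can store KB-specific facts while the
    PLM's own weights stay frozen."""
    def __init__(self, hidden_size: int, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck_size, hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection: a near-zero-initialized adapter initially
        # leaves the pre-trained model's behaviour unchanged.
        return hidden_states + self.up(self.act(self.down(hidden_states)))
```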
The current trend of applying transfer learning from CNNs trained on large datasets can be overkill when the target application is a custom and delimited problem with enough data to train a network from scratch. On the other hand, training custom and lighter CNNs requires expertise, in the from-scratch case, and/or high-end resources, as in the case of hardware-aware neural architecture search (HW NAS), limiting access to the technology for non-habitual NN developers. For this reason, we present Colab NAS, an affordable HW NAS technique for producing lightweight, task-specific CNNs. Its novel derivative-free search strategy, inspired by Occam's razor, allows it to obtain state-of-the-art results on the Visual Wake Word dataset in just 4.5 GPU hours using free online GPU services such as Google Colaboratory and Kaggle Kernel.
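The sketch below conveys the Occam's-razor intuition behind a derivative-free search: grow candidate capacity and stop as soon as the accuracy gain no longer justifies a bigger model. The train_eval callback, the width ladder, and the tolerance are hypothetical placeholders, not Colab NAS's actual search procedure.

```python
def occam_search(train_eval, widths=(8, 16, 32, 64), tolerance=0.005):
    """Illustrative derivative-free search in the Occam's-razor spirit.
    `train_eval(width)` is a hypothetical callback that trains a
    candidate CNN of the given width and returns validation accuracy."""
    best_width, best_acc = None, 0.0
    for width in widths:
        acc = train_eval(width)
        if best_width is not None and acc - best_acc < tolerance:
            break                      # a larger model no longer pays off
        best_width, best_acc = width, acc
    return best_width, best_acc
```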
In this paper we propose a general approach to defining a many-valued preferential interpretation of gradual argumentation semantics. The approach allows for conditional reasoning over arguments and Boolean combinations of arguments, with respect to a class of gradual semantics, through the verification of graded (strict or defeasible) implications over a preferential interpretation. As a proof of concept, in the finitely-valued case, an Answer Set Programming approach is proposed for conditional reasoning in a many-valued argumentation semantics of weighted argumentation graphs. The paper also develops and discusses a probabilistic semantics for gradual argumentation, which builds on the many-valued conditional semantics.
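As a toy rendering of graded implications over a preferential interpretation, the check below treats an interpretation as a set of many-valued valuations of arguments: a strict implication must hold in every valuation satisfying the body to degree alpha, a defeasible one only in the most preferred such valuations. This is a deliberate simplification for illustration; the paper's formal semantics (and its ASP encoding) are richer.

```python
def satisfies_graded_implication(valuations, body, head, alpha, strict=True):
    """Toy check of a graded implication (body => head at threshold alpha)
    over many-valued valuations mapping arguments to [0, 1].
    Strict: the head must reach alpha in every valuation whose body does.
    Defeasible: only in the most preferred such valuations; preference is
    crudely approximated here by the body's degree (an assumption)."""
    relevant = [v for v in valuations if v[body] >= alpha]
    if strict:
        return all(v[head] >= alpha for v in relevant)
    if not relevant:
        return True                    # vacuously satisfied
    top = max(v[body] for v in relevant)
    return all(v[head] >= alpha for v in relevant if v[body] == top)
```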
Weather forecasting centers currently rely on statistical postprocessing methods to minimize forecast error. This improves skill but can lead to predictions that violate physical principles or disregard dependencies between variables, which can be problematic for downstream applications and for the trustworthiness of postprocessing models, especially when they are based on new machine learning approaches. Building on recent advances in physics-informed machine learning, we propose to achieve physical consistency in deep-learning-based postprocessing models by integrating meteorological expertise in the form of analytic equations. Applied to the postprocessing of surface weather in Switzerland, we find that constraining a neural network to enforce thermodynamic state equations yields physically consistent predictions of temperature and humidity without compromising performance. Our approach is especially advantageous when data is scarce, and our findings suggest that incorporating domain expertise into postprocessing models makes it possible to optimize weather forecast information while satisfying application-specific requirements.
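One way to enforce a thermodynamic state equation by construction is to let the network predict temperature and dew point and derive relative humidity analytically, for example via the Magnus approximation for saturation vapour pressure. The sketch below illustrates that constrained-output pattern; the choice of variables and the softplus trick for keeping the dew point below the temperature are assumptions, not necessarily the paper's architecture.

```python
import torch
import torch.nn.functional as F

def magnus_saturation_pressure(t_celsius: torch.Tensor) -> torch.Tensor:
    """Saturation vapour pressure in hPa (Magnus approximation)."""
    return 6.112 * torch.exp(17.62 * t_celsius / (243.12 + t_celsius))

def physically_consistent_outputs(raw: torch.Tensor):
    """Constrained-output sketch: the network predicts temperature and a
    non-negative dew-point depression; relative humidity is then derived
    analytically, so the triple satisfies the state equation by
    construction. Variable choices are illustrative assumptions."""
    t = raw[..., 0]                      # temperature [deg C]
    t_dew = t - F.softplus(raw[..., 1])  # dew point, kept <= t
    rh = 100.0 * magnus_saturation_pressure(t_dew) / magnus_saturation_pressure(t)
    return t, t_dew, rh
```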
We study the learning dynamics of self-predictive learning for reinforcement learning, a family of algorithms that learn representations by minimizing the prediction error of their own future latent representations. Despite its recent empirical success, such algorithms have an apparent defect: trivial representations (such as constants) minimize the prediction error, yet it is obviously undesirable to converge to such solutions. Our central insight is that careful design of the optimization dynamics is critical to learning meaningful representations. We identify that a faster-paced optimization of the predictor and semi-gradient updates on the representation are crucial to preventing representation collapse. Then, in an idealized setup, we show that self-predictive learning dynamics carry out spectral decomposition on the state transition matrix, effectively capturing information about the transition dynamics. Building on these theoretical insights, we propose bidirectional self-predictive learning, a novel self-predictive algorithm that learns two representations simultaneously. We examine the robustness of our theoretical insights with a number of small-scale experiments and showcase the promise of the novel representation learning algorithm with large-scale experiments.
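The two ingredients highlighted above can be written down in a few lines; in the sketch below the semi-gradient is implemented with detach(), which treats the prediction target as a constant, and the predictor gets several updates per encoder update. Function names and step counts are illustrative assumptions.

```python
import torch

def self_predictive_step(encoder, predictor, opt_enc, opt_pred,
                         s, s_next, predictor_steps=5):
    """One update illustrating (1) faster-paced predictor optimization
    and (2) a semi-gradient representation update (sketch only)."""
    # (1) Several predictor updates with the encoder held fixed.
    for _ in range(predictor_steps):
        loss_p = ((predictor(encoder(s).detach())
                   - encoder(s_next).detach()) ** 2).mean()
        opt_pred.zero_grad()
        loss_p.backward()
        opt_pred.step()

    # (2) Semi-gradient: the target encoder(s_next) is detached, so no
    # gradient flows through it into the representation.
    z, z_target = encoder(s), encoder(s_next).detach()
    loss_e = ((predictor(z) - z_target) ** 2).mean()
    opt_enc.zero_grad()
    loss_e.backward()
    opt_enc.step()
```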
Pretrained transformer models have recently achieved state-of-the-art results on many tasks and benchmarks. Many state-of-the-art Language Models (LMs), however, do not scale well above the threshold of 512 input tokens. In specialized domains (such as the legal, scientific, or biomedical domain), models often need to process very long text (sometimes well above 10000 tokens). Even though many efficient transformers have been proposed (such as Longformer, BigBird, or FNet), so far only very few such efficient models are available for specialized domains. Additionally, since the pretraining process is extremely costly in general, and even more so as the sequence length increases, it is often only within reach of large research labs. One way of making pretraining cheaper is the Replaced Token Detection (RTD) task, which provides more signal during training, since the loss can be computed over all tokens. In this work, we train Longformer models with the efficient RTD task on legal data to show that pretraining efficient LMs is possible using much less compute. We evaluate the trained models on challenging summarization tasks that require the model to summarize long texts, to show to what extent the models can achieve good performance on downstream tasks. We find that both the small and base models outperform their baselines on the in-domain BillSum and out-of-domain PubMed tasks in their respective parameter ranges. We publish our code and models for research purposes.
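The extra training signal of RTD comes from scoring every token rather than only the masked ones. A minimal sketch of the discriminator loss, assuming per-token logits and the corrupted/original id tensors are available:

```python
import torch
import torch.nn.functional as F

def rtd_loss(disc_logits, corrupted_ids, original_ids):
    """Replaced Token Detection loss (sketch, with assumed tensor names):
    the discriminator scores every position as original vs. replaced, so
    unlike masked language modeling, every token contributes a training
    signal, which is the efficiency gain the abstract refers to."""
    labels = (corrupted_ids != original_ids).float()  # 1 = generator-replaced
    return F.binary_cross_entropy_with_logits(disc_logits, labels)
```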
Synthetic data generation has recently gained widespread attention as a more reliable alternative to traditional data anonymization. The methods involved were originally developed for image synthesis, so their application to the typically tabular and relational datasets from healthcare, finance, and other industries is non-trivial. While substantial research has been devoted to the generation of realistic tabular datasets, the study of synthetic relational databases is still in its infancy. In this paper, we combine the variational autoencoder framework with graph neural networks to generate realistic synthetic relational databases. We then apply the resulting method to two publicly available databases in computational experiments. The results indicate that the structures of real databases are accurately preserved in the resulting synthetic datasets, even for large datasets with advanced data types.
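A minimal sketch of how the two ingredients might be combined: rows become graph nodes, foreign-key links become edges, and a message-passing encoder feeds a Gaussian latent head, as in a standard VAE. Layer sizes and the mean-aggregation scheme are assumptions; the paper's actual model (decoders, type handling, training objective) is not reproduced here.

```python
import torch
import torch.nn as nn

class RelationalGraphVAEEncoder(nn.Module):
    """Sketch of the encoder side of a graph-based VAE for relational
    data: one round of mean-aggregation message passing over the
    foreign-key graph, followed by a Gaussian latent head."""
    def __init__(self, in_dim: int, hidden_dim: int, z_dim: int):
        super().__init__()
        self.msg = nn.Linear(in_dim, hidden_dim)
        self.mu = nn.Linear(hidden_dim, z_dim)
        self.logvar = nn.Linear(hidden_dim, z_dim)

    def forward(self, x, adj):
        # x: node features (one table row per node)
        # adj: row-normalized adjacency built from foreign-key links
        h = torch.relu(self.msg(adj @ x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return z, mu, logvar
```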